Proxmox VE 存储方案对比

最后更新于:2025-11-21 19:22:34

Virtualized Storage Architectures on Proxmox VE: A Comprehensive Comparative Analysis of TrueNAS, OpenMediaVault, XigmaNAS, and Rockstor

Proxmox VE 上的虚拟化存储架构:TrueNAS、OpenMediaVault、XigmaNAS 和 Rockstor 的综合对比分析

1. Executive Summary

1. 执行摘要

The convergence of hypervisor computing and software-defined storage (SDS) has fundamentally altered the landscape of homelab and enterprise data management. This report provides an exhaustive technical evaluation of deploying four primary Network Attached Storage (NAS) operating systems—TrueNAS (SCALE/Core), OpenMediaVault (OMV), XigmaNAS, and Rockstor—as virtual machines within the Proxmox Virtual Environment (Proxmox VE). The analysis is predicated on the fundamental tension between the convenience of virtualization (abstraction) and the rigorous requirements of storage reliability (hardware directness).

English:

The research indicates a distinct bifurcation in deployment philosophy. TrueNAS and XigmaNAS, leveraging the ZFS filesystem, demand strict hardware adherence, effectively mandating PCI Passthrough of storage controllers (HBAs) to guarantee data integrity and functional self-healing. Conversely, OpenMediaVault (OMV), utilizing standard Linux filesystems (Ext4/XFS), exhibits superior adaptability to virtualized storage layers (VirtIO), offering a performance advantage of approximately 18% in specific virtualized benchmarks due to reduced overhead. Rockstor occupies a precarious middle ground, offering the advanced features of Btrfs but suffering from maturity issues in virtualized RAID implementations and documentation warnings regarding virtual disk identifiers. This report definitively categorizes TrueNAS SCALE (with HBA passthrough) as the standard for critical data integrity, and OpenMediaVault (on VirtIO-SCSI) as the optimal solution for converged, resource-efficient deployments.

Chinese:

研究表明,在部署理念上存在明显的分歧。TrueNAS 和 XigmaNAS 利用 ZFS 文件系统,要求严格的硬件依附性,实际上强制要求存储控制器(HBA)的 PCI 直通,以保证数据完整性和功能性自我修复。相反,利用标准 Linux 文件系统(Ext4/XFS)的 OpenMediaVault (OMV) 对虚拟化存储层(VirtIO)表现出卓越的适应性,由于开销减少,在特定的虚拟化基准测试中提供了大约 18% 的性能优势。Rockstor 处于一个不稳定的中间地带,虽然提供了 Btrfs 的高级功能,但在虚拟化 RAID 实现的成熟度以及关于虚拟磁盘标识符的文档警告方面存在问题。本报告明确将 TrueNAS SCALE(配合 HBA 直通)归类为关键数据完整性的标准,将 OpenMediaVault(基于 VirtIO-SCSI)归类为融合、资源高效型部署的最佳解决方案。

2. Architectural Foundations: The Hypervisor I/O Stack

2. 架构基础:Hypervisor I/O 堆栈

To accurately compare these operating systems, one must first dissect the environment in which they operate. Proxmox VE is based on KVM (Kernel-based Virtual Machine) and QEMU. The manner in which storage requests traverse from the Guest OS (the NAS) to the physical media dictates performance, reliability, and data safety.

English:

In a virtualized storage appliance, the "I/O path" is the critical vector of analysis. When a Guest OS issues a write command, it typically traverses the Guest's filesystem, the Guest's virtual disk driver, the Hypervisor's user-space emulator (QEMU), the Host's filesystem or block layer, and finally the physical disk controller. This multi-layer chain introduces latency and context-switching overhead. The efficiency of this path varies significantly between the two primary methods of disk presentation available in Proxmox: VirtIO Paravirtualization and PCI Passthrough.

Chinese:

在虚拟化存储设备中,“I/O 路径”是分析的关键向量。当客户操作系统发出写入命令时,它通常会经过客户机的文件系统、客户机的虚拟磁盘驱动程序、Hypervisor 的用户空间仿真器 (QEMU)、主机的文件系统或块层,最后到达物理磁盘控制器。这一严格的链条引入了延迟和“上下文切换”开销。在 Proxmox 中可用的两种主要磁盘呈现方法之间,即 VirtIO 半虚拟化和 PCI 直通,该路径的效率存在显著差异。

2.1 The VirtIO Abstraction Layer and VirtIO-SCSI

2.1 VirtIO 抽象层与 VirtIO-SCSI

English:

VirtIO represents a standardized interface for virtual machines to access simplified "virtual" devices, such as block devices and network adapters. Rather than emulating legacy hardware (like an IDE controller), which requires the guest to perform costly register-level operations that the hypervisor must intercept and translate, VirtIO devices are "enlightened." The guest OS knows it is running in a virtual environment and cooperates with the hypervisor.

Research confirms that VirtIO-SCSI is the superior controller emulation for NAS workloads compared to VirtIO-Block (virtio-blk). While virtio-blk is slightly more efficient for simple throughput, virtio-scsi supports advanced SCSI command sets, including UNMAP (discard/TRIM), which is vital for maintaining thin provisioning on the host. Furthermore, the virtio-scsi-single controller type in Proxmox enables the use of IOThreads. By assigning a dedicated thread to the disk controller, Proxmox decouples storage I/O processing from the main QEMU execution loop. This prevents heavy storage operations in the NAS VM from blocking other VM activities, a critical optimization for multi-tenant homelabs.1

Chinese:

VirtIO 代表了一种标准接口,用于虚拟机访问简化的“虚拟”设备,如块设备和网络适配器。VirtIO 设备不是模拟传统硬件(如 IDE 控制器)——这需要客户机执行昂贵的寄存器级操作,而 Hypervisor 必须拦截并翻译这些操作——而是“被启发的(enlightened)”。客户操作系统知道它在虚拟环境中运行,并与 Hypervisor 合作。

研究证实,与 VirtIO-Block (virtio-blk) 相比,VirtIO-SCSI 是 NAS 工作负载的更优控制器仿真。虽然 virtio-blk 在简单吞吐量方面稍微高效一些,但 virtio-scsi 支持高级 SCSI 命令集,包括 UNMAP (discard/TRIM),这对维护主机的精简配置至关重要。此外,Proxmox 中的 virtio-scsi-single 控制器类型支持使用 IOThreads。通过为磁盘控制器分配专用线程,Proxmox 将存储 I/O 处理与主 QEMU 执行循环解耦。这防止了 NAS 虚拟机中的繁重存储操作阻塞其他虚拟机的活动,这是多租户家庭实验室的关键优化 1。
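Proxmox exposes these options directly on the VM's virtual hardware. The following is a minimal sketch of setting the controller type and a dedicated IOThread from the host shell; the VM ID 100 and the storage name local-lvm are illustrative placeholders, not values taken from this report:

    # select the per-disk SCSI controller so each disk can receive its own IOThread
    qm set 100 --scsihw virtio-scsi-single
    # attach an existing virtual disk with an IOThread bound to it
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1

The same settings are available in the GUI under the VM's Hardware tab; in the VM configuration file they appear as scsihw: virtio-scsi-single and iothread=1 on the disk line.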

2.2 The Imperative of PCI Passthrough (IOMMU)

2.2 PCI 直通 (IOMMU) 的必要性

English:

For filesystems like ZFS (used by TrueNAS and XigmaNAS), the abstraction of VirtIO is fundamentally problematic. ZFS is an "end-to-end" integrity system designed to manage the raw physical geometry of the disk. It expects to handle bad sector reallocation, flush caches directly to non-volatile media, and read S.M.A.R.T data to predict failure.

When a disk is presented via VirtIO (Disk Passthrough), QEMU presents a sanitized logical block device to the guest. The guest cannot see the physical serial number reliably, nor can it directly query the hardware health status. Consequently, PCI Passthrough is the architectural requirement for these systems. This utilizes the host CPU's IOMMU (Input-Output Memory Management Unit) to map the physical PCI address of a Storage Controller (HBA) directly into the memory space of the VM. The Guest OS loads its own native driver (e.g., the LSI HBA driver) and interacts with the card as if it were bare metal, bypassing the hypervisor's storage stack entirely.4

Chinese:

对于像 ZFS(由 TrueNAS 和 XigmaNAS 使用)这样的文件系统,VirtIO 的抽象在根本上是有问题的。ZFS 是一个“端到端”的完整性系统,旨在管理磁盘的原始物理几何结构。它期望处理坏扇区重新分配,将缓存直接刷新到非易失性介质,并读取 S.M.A.R.T 数据以预测故障。

当通过 VirtIO(磁盘直通)呈现磁盘时,QEMU 向客户机呈现经过净化的逻辑块设备。客户机无法可靠地看到物理序列号,也不能直接查询硬件健康状态。因此,PCI 直通是这些系统的架构要求。这利用主机 CPU 的 IOMMU(输入输出内存管理单元)将存储控制器 (HBA) 的物理 PCI 地址直接映射到虚拟机的内存空间中。客户操作系统加载其自己的原生驱动程序(例如 LSI HBA 驱动程序),并像裸机一样与卡交互,完全绕过 Hypervisor 的存储堆栈 4。
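As a reference point, the commonly documented host-side preparation for passthrough looks roughly like the sketch below. An Intel system and a GRUB-booted host are assumed; AMD hosts use amd_iommu=on, and exact module requirements can vary by kernel version:

    # /etc/default/grub on the Proxmox host
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
    update-grub
    # /etc/modules: load the VFIO framework at boot
    vfio
    vfio_iommu_type1
    vfio_pci
    # after a reboot, confirm the IOMMU is active and inspect the groups
    dmesg | grep -e DMAR -e IOMMU
    find /sys/kernel/iommu_groups/ -type l

The HBA should sit alone in its IOMMU group; every device sharing a group must be passed through together.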

3. Candidate Analysis: TrueNAS (SCALE & Core)

3. 候选分析:TrueNAS (SCALE & Core)

TrueNAS is widely regarded as the premier enterprise storage solution. However, its architecture, heavily reliant on OpenZFS, is notoriously unforgiving of improper virtualization. The transition from the FreeBSD-based "Core" to the Linux-based "SCALE" has introduced new capabilities but also new complexities in resource management.

English:

The cardinal rule of virtualizing TrueNAS, reiterated across every technical forum and white paper, is the absolute prohibition of virtual disks for the main data pool. The community virtualization guide explicitly warns: "DO NOT use hard drive passthrough... to get around a lack of decent PCIe-Passthrough support." The risk involves the ZFS Intent Log (ZIL): if the virtualization layer (Proxmox) reports a write as committed while the data still sits in a volatile buffer, and power is lost before it reaches the physical platter, ZFS, believing the data is safe, can suffer catastrophic pool corruption. This "write hole" is mitigated only by giving ZFS direct control over the HBA.5

Installation best practices dictate a hybrid approach: A small VirtIO virtual disk (typically 16-32GB) should be used for the TrueNAS boot drive. This resides on the Proxmox host's SSD (often NVMe), benefiting from the host's speed and simplifying boot drive redundancy (snapshotting the VM file). The data drives, conversely, must be attached to a physical HBA passed through to the VM.4

Chinese:

虚拟化 TrueNAS 的首要规则,在每个技术论坛和白皮书中都被反复重申,就是绝对禁止将虚拟磁盘用于主数据池。该社区虚拟化指南明确警告:“不要使用硬盘直通……来规避缺乏良好 PCIe 直通支持的问题。” 风险涉及 ZFS 意图日志 (ZIL):如果虚拟化层 (Proxmox) 声称写入已“提交”到磁盘(实际仍存储在易失性缓冲区中),但在到达物理盘片之前断电,认为数据安全的 ZFS 可能会遭受灾难性的池损坏。这种“写入漏洞”只能通过让 ZFS 直接控制 HBA 来缓解 5。

安装最佳实践规定了一种混合方法:应使用一个小的 VirtIO 虚拟磁盘(通常为 16-32GB)作为 TrueNAS 的引导驱动器。这驻留在 Proxmox 主机的 SSD(通常是 NVMe)上,受益于主机的速度并简化引导驱动器的冗余(对虚拟机文件进行快照)。相反,数据驱动器必须连接到直通给虚拟机的物理 HBA 4。
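A minimal sketch of this hybrid layout, assuming a hypothetical VM ID 101, a storage pool named local-zfs, and an HBA at PCI address 0000:01:00.0 (none of these values come from the report), would be:

    # small virtual boot disk (32 GB) on the host's fast storage
    qm set 101 --scsihw virtio-scsi-single
    qm set 101 --scsi0 local-zfs:32,iothread=1
    # hand the entire storage controller to the guest
    qm set 101 --hostpci0 0000:01:00.0
    # static memory for ZFS: fixed allocation, ballooning disabled
    qm set 101 --memory 32768 --balloon 0

With the HBA passed through, the data disks never appear in the Proxmox storage configuration at all; TrueNAS drives them with its own native driver.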

3.1 The Memory War: ARC vs. Host RAM

3.1 内存之战:ARC 与主机 RAM

English:

A significant operational challenge with TrueNAS SCALE in a VM is memory management. ZFS uses the Adaptive Replacement Cache (ARC), which aggressively caches reads in RAM to boost performance. On bare metal, ZFS will consume nearly all available free RAM and release it when applications request it. However, inside a VM, the Guest OS (TrueNAS) does not know that the Host (Proxmox) might need that RAM for other VMs.

Furthermore, TrueNAS SCALE (Linux-based) imposes a default limit on ARC of 50% of the guest's assigned RAM. This is a safety measure to ensure Docker (Kubernetes) containers and KVM processes running inside TrueNAS do not starve. For a dedicated storage VM, this is inefficient; assigning 64GB of RAM to the VM results in only 32GB being used for caching. Advanced users must employ init scripts (via the GUI "Post Init" command) to force zfs_arc_max to a higher value, e.g., echo 53687091200 > /sys/module/zfs/parameters/zfs_arc_max to set a 50GB limit. This manual tuning is non-standard and highlights the friction of running an appliance OS inside a hypervisor.7

Chinese:

TrueNAS SCALE 在虚拟机中的一个主要操作挑战是内存管理。ZFS 使用自适应替换缓存 (ARC),它会积极地将读取内容缓存在 RAM 中以提高性能。在裸机上,ZFS 会消耗几乎所有可用的空闲 RAM,并在应用程序请求时释放它。然而,在虚拟机内部,客户操作系统 (TrueNAS) 不知道主机 (Proxmox) 可能需要该 RAM 用于其他虚拟机。

此外,TrueNAS SCALE(基于 Linux)对 ARC 施加了客户机分配 RAM 的 50% 的默认限制。这是一项安全措施,旨在确保在 TrueNAS 内部运行的 Docker (Kubernetes) 容器和 KVM 进程不会资源匮乏。对于专用存储虚拟机,这效率低下;分配 64GB RAM 给虚拟机导致仅 32GB 用于缓存。高级用户必须使用初始化脚本(通过 GUI “Post Init” 命令)将 zfs_arc_max 强制设置为更高的值,例如,echo 53687091200 > /sys/module/zfs/parameters/zfs_arc_max 以设置 50GB 限制。这种手动调优是非标准的,突显了在 Hypervisor 内部运行设备操作系统的摩擦 7。
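A sketch of that tuning, assuming a VM with 64GB of RAM and the roughly 50GiB ARC ceiling used in the example above, entered as a Post Init command in the SCALE GUI (System Settings > Advanced > Init/Shutdown Scripts):

    # raise the ARC ceiling from the 50% default to ~50 GiB (50 * 1024^3 bytes)
    echo 53687091200 > /sys/module/zfs/parameters/zfs_arc_max
    # verify the new ceiling took effect
    grep c_max /proc/spl/kstat/zfs/arcstats

Because the value lives in a runtime module parameter, it must be reapplied on every boot, which is exactly what the Post Init hook provides.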

3.2 QEMU Guest Agent Integration

3.2 QEMU 客户代理集成

English:

Integration with the hypervisor is managed via the QEMU Guest Agent (QGA). TrueNAS SCALE, being Debian-based, includes the agent by default. This allows Proxmox to see the IP address of the NAS and, crucially, to issue clean shutdown commands. Unlike FreeBSD-based versions where agent installation can be manual and buggy, SCALE provides a seamless "out-of-the-box" experience regarding hypervisor power management.10

Chinese:

与 Hypervisor 的集成是通过 QEMU 客户代理 (QGA) 管理的。TrueNAS SCALE 基于 Debian,默认包含该代理。这允许 Proxmox 查看 NAS 的 IP 地址,至关重要的是,可以发出干净的关机命令。与 FreeBSD 版本(代理安装可能需要手动且容易出错)不同,SCALE 在 Hypervisor 电源管理方面提供了无缝的“开箱即用”体验 10。
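On the Proxmox side the agent still has to be switched on per VM; a quick check from the host shell (VM ID 101 is again a placeholder) confirms the channel is working:

    # enable the agent channel for the VM, then query it once the guest is booted
    qm set 101 --agent enabled=1
    qm agent 101 ping
    qm agent 101 network-get-interfaces

If the ping succeeds, Proxmox can display the guest IP in the GUI and perform agent-initiated clean shutdowns instead of relying on ACPI power events.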

4. Candidate Analysis: OpenMediaVault (OMV)

4. 候选分析:OpenMediaVault (OMV)

OpenMediaVault (OMV) represents a fundamentally different architectural philosophy. While TrueNAS is an "appliance" (a rigid, pre-configured firmware), OMV is essentially a package set running on top of standard Debian Linux. This decoupling allows it to be far more adaptable to the virtualization layer.

English:

OMV is the "Gold Standard" for flexibility. Because it does not enforce the use of ZFS (though it supports it via plugin), it functions exceptionally well with VirtIO Virtual Disks. The Linux kernel's Ext4 and XFS filesystems are robust enough to handle the abstraction of a virtual block device without the "write hole" anxieties that plague ZFS. This allows a Proxmox administrator to manage storage at the host level (e.g., creating a large LVM thin pool or ZFS pool on the host) and simply carve out virtual disk files (.qcow2 or .raw) for OMV.

Performance benchmarks cited in the research suggest that this lightweight approach can yield significant dividends. In specific file management operations, OMV has been observed to outperform TrueNAS by approximately 18.73%.12 This performance delta is attributed to the lower CPU overhead of Ext4/XFS compared to the checksum-heavy, compression-enabled ZFS pipeline, combined with the efficiency of the Linux-on-Linux KVM virtualization stack.

Chinese:

OMV 是灵活性的“黄金标准”。因为它不强制使用 ZFS(尽管它通过插件支持 ZFS),所以它在 VirtIO 虚拟磁盘上运行得非常好。Linux 内核的 Ext4 和 XFS 文件系统足够健壮,可以处理虚拟块设备的抽象,而没有困扰 ZFS 的“写入漏洞”焦虑。这允许 Proxmox 管理员在主机级别管理存储(例如,在主机上创建一个大型 LVM 精简池或 ZFS 池),并简单地为 OMV 划分出虚拟磁盘文件(.qcow2 或 .raw)。

研究中引用的性能基准测试表明,这种轻量级方法可以带来显著的红利。在特定的文件管理操作中,观察到 OMV 的性能比 TrueNAS 高出约 18.73% 12。这种性能差异归因于 Ext4/XFS 相比校验和繁重、启用压缩的 ZFS 管道具有更低的 CPU 开销,以及 Linux-on-Linux KVM 虚拟化堆栈的效率。

4.1 Optimizing VirtIO-SCSI for OMV

4.1 为 OMV 优化 VirtIO-SCSI

English:

To maximize OMV performance, specific configurations on the virtual disk controller are required. The use of VirtIO-SCSI over VirtIO-Block is mandatory to support the Discard (TRIM) command. Without TRIM, a virtual disk file (like a qcow2 image) will grow indefinitely as data is written, even if files are deleted inside the OMV guest. The guest OS must issue discard commands to tell the host to free up the underlying blocks.

Configuration Checklist for OMV on Proxmox:

Controller: Set to VirtIO-SCSI Single.

Disk Option: Enable Discard and SSD Emulation.

IO Thread: Enable IO Thread to offload processing.

Failure to enable Discard results in "storage bloat," where a 1TB virtual disk consumes 1TB of physical space even if it only contains 50GB of data.2

Chinese:

为了最大化 OMV 性能,需要在虚拟磁盘控制器上进行特定配置。必须使用 VirtIO-SCSI 而非 VirtIO-Block 以支持 Discard (TRIM) 命令。如果没有 TRIM,虚拟磁盘文件(如 qcow2 镜像)将随着数据的写入无限增长,即使在 OMV 客户机内部删除了文件。客户操作系统必须发出 discard 命令以告知主机释放底层块。

Proxmox 上 OMV 的配置清单:

控制器: 设置为 VirtIO-SCSI Single。

磁盘选项: 启用 Discard 和 SSD 仿真。

IO 线程: 启用 IO Thread 以卸载处理工作。

未能启用 Discard 会导致“存储膨胀”,即 1TB 的虚拟磁盘即使只包含 50GB 数据,也会消耗 1TB 的物理空间 2。
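Expressed as host-side commands, the checklist above reduces to something like the following sketch. The VM ID 102 and the storage name local-lvm are illustrative placeholders; the fstrim call at the end runs inside the OMV guest to confirm that discards actually reach the host:

    # on the Proxmox host
    qm set 102 --scsihw virtio-scsi-single
    qm set 102 --scsi1 local-lvm:vm-102-disk-1,discard=on,ssd=1,iothread=1
    # inside the OMV guest: trim all mounted filesystems and report reclaimed space
    fstrim -av

Debian-based OMV also ships the standard fstrim.timer systemd unit, which can be enabled if periodic trimming is preferred over ad-hoc runs.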

5. Candidate Analysis: XigmaNAS (FreeBSD Legacy)

5. 候选分析:XigmaNAS (FreeBSD 遗产)

XigmaNAS (formerly NAS4Free) is a continuation of the original FreeNAS code base, prioritizing lightness and efficiency over the feature-rich (and bloat-heavy) direction taken by TrueNAS. It runs on FreeBSD.

English:

XigmaNAS is undeniably efficient, capable of running ZFS on hardware with significantly fewer resources than TrueNAS SCALE requires.13 However, its reliance on FreeBSD creates significant friction in a Proxmox (Linux KVM) environment. The primary failure mode involves the QEMU Guest Agent and backup consistency.

When Proxmox initiates a backup of a running VM, it instructs the Guest Agent to "freeze" the filesystem (fs-freeze) to ensure all data is flushed to disk, creating a consistent snapshot. Research indicates that on FreeBSD guests (like XigmaNAS and pfSense), this command frequently causes the VM to hang or time out, failing the backup job. The accepted workaround, disabling the "Run QEMU Guest Agent" flag or unchecking "Freeze/Thaw" in the backup settings, results in "crash-consistent" rather than "application-consistent" backups. This means a restored backup might look like the system lost power suddenly, potentially requiring a filesystem check (fsck) or ZFS scrub upon boot.14

Chinese:

XigmaNAS 无疑是高效的,能够在比 TrueNAS SCALE 所需资源少得多的硬件上运行 ZFS 13。然而,它对 FreeBSD 的依赖在 Proxmox (Linux KVM) 环境中产生了巨大的摩擦。主要的故障模式涉及 QEMU 客户代理和备份一致性。

当 Proxmox 启动对运行中虚拟机的备份时,它会指示客户代理“冻结”文件系统 (fs-freeze) 以确保所有数据都刷新到磁盘,从而创建一致的快照。研究表明,在 FreeBSD 客户机(如 XigmaNAS 和 pfSense)上,此命令经常导致虚拟机挂起或超时,导致备份任务失败。可接受的变通方法——禁用“运行 QEMU 客户代理”标志或在备份设置中取消选中“冻结/解冻”——会导致“崩溃一致性”备份,而不是“应用程序一致性”备份。这意味着恢复的备份可能看起来像系统突然断电,可能需要在启动时进行文件系统检查 (fsck) 或 ZFS 清理 14。
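In practice the workaround is applied on the Proxmox side. Recent Proxmox VE releases expose a per-VM switch for this; the VM ID 103 below is a placeholder, and availability of the option depends on the PVE version:

    # keep the agent for IP reporting and clean shutdown, but skip fs-freeze during backups
    qm set 103 --agent enabled=1,freeze-fs-on-backup=0
    # or, more bluntly, disable the agent so vzdump never attempts the freeze at all
    qm set 103 --agent 0

Either way, the resulting vzdump backups are crash-consistent only, as described above.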

5.1 The Backup Client Compiling Struggle

5.1 备份客户端编译的挣扎

English:

A further limitation is the difficulty of running the Proxmox Backup Server (PBS) client. While OMV and TrueNAS SCALE (both Linux-based) can simply install the proxmox-backup-client binary to perform granular file-level backups, XigmaNAS users are left stranded. There is no official FreeBSD port for the client. Users attempting to compile the Rust-based client from source on FreeBSD face dependency hell and frequent compilation failures.17 This forces XigmaNAS users to rely on full-VM block-level backups (which are large) or traditional rsync, losing the deduplication benefits of PBS.

Chinese:

进一步的限制是无法轻松运行 Proxmox Backup Server (PBS) 客户端。虽然 OMV 和 TrueNAS (Linux) 可以轻松安装 proxmox-backup-client 二进制文件以执行细粒度的文件级备份,但 XigmaNAS 用户却束手无策。该客户端没有官方的 FreeBSD 移植版。试图在 FreeBSD 上从源代码编译基于 Rust 的客户端的用户面临依赖地狱和频繁的编译失败 17。这迫使 XigmaNAS 用户依赖全虚拟机块级备份(体积巨大)或传统的 rsync,失去了 PBS 的重复数据删除魔力。

6. Candidate Analysis: Rockstor (Btrfs Focus)

6. 候选分析:Rockstor (Btrfs 焦点)

Rockstor is unique in its exclusive focus on the Btrfs filesystem, offering a "ZFS-like" feature set (snapshots, checksums, raid) on a Linux (openSUSE) base.

English:

Rockstor presents a paradox in virtualization. It is Linux-based (good for KVM compatibility) but relies on Btrfs, a filesystem that is notoriously sensitive to the underlying hardware presentation. The Rockstor documentation explicitly warns about the use of Virtual Disks. Btrfs tracks devices by UUID and Serial Number. In some virtualization scenarios, if the disk configuration changes or if the XML definition of the VM is altered, the "Virtual Serial Number" of the disk might change. This can cause Rockstor to identify the pool as foreign or broken, leading to a detached pool state that requires manual command-line intervention to fix.19

Furthermore, while Btrfs supports RAID5/6 profiles, the implementation is famously unstable ("The Write Hole" issue remains unsolved in Btrfs RAID5/6). Running a virtualized RAID5 Btrfs pool on Rockstor is considered highly risky. Additionally, "Scrubbing" a Btrfs pool on virtual disks has been shown to be incredibly slow due to the I/O overhead, with users reporting scrub speeds dropping to 16MB/s on capable hardware due to the virtualization bottleneck.21

Chinese:

Rockstor 在虚拟化方面呈现出一个悖论。它是基于 Linux 的(利于 KVM 兼容性),但依赖于 Btrfs,这是一种对底层硬件呈现出了名敏感的文件系统。Rockstor 文档明确警告关于 虚拟磁盘 的使用。Btrfs 通过 UUID 和序列号跟踪设备。在某些虚拟化场景中,如果磁盘配置发生变化或虚拟机的 XML 定义被修改,磁盘的“虚拟序列号”可能会改变。这可能导致 Rockstor 将池识别为外来的或损坏的,导致池分离状态,需要手动命令行干预来修复 19。

此外,虽然 Btrfs 支持 RAID5/6 配置,但其实现在业界以不稳定著称(Btrfs RAID5/6 中的“写入漏洞”问题仍未解决)。在 Rockstor 上运行虚拟化的 RAID5 Btrfs 池被认为是极具风险的。此外,在虚拟磁盘上“清理(Scrubbing)” Btrfs 池已被证明由于 I/O 开销而极其缓慢,用户报告称由于虚拟化瓶颈,在有能力的硬件上清理速度降至 16MB/s 21。
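One partial mitigation, if virtual disks must be used, is to pin a fixed serial string on each virtual disk so the identifier Rockstor tracks cannot silently change between configuration edits. A sketch using placeholder values (VM ID 104, storage local-lvm, and the serial string itself):

    # assign a stable, human-readable serial to the Rockstor data disk
    qm set 104 --scsi1 local-lvm:vm-104-disk-1,serial=ROCKSTOR-DATA-01

This does not address the Btrfs RAID5/6 write hole or the scrub performance penalty; it only reduces the risk of the pool being flagged as detached after configuration changes.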

7. Ecosystem Integration: Backups and The Proxmox Backup Server (PBS)

7. 生态系统集成:备份与 Proxmox Backup Server (PBS)

A holistic view of storage must include backup strategy. Proxmox Backup Server (PBS) provides deduplicated, incremental backups. The ability of the NAS Guest to interact with PBS is a decisive factor.

English:

OpenMediaVault shines here. Being standard Debian, installing the PBS client is trivial (apt install proxmox-backup-client). This allows OMV to push backups of specific folders to the PBS server, bypassing the hypervisor entirely.

TrueNAS SCALE allows this via a "hack." Users can enable the apt package manager (normally disabled) to install the client. While effective, allowing direct dataset backups to PBS, this is officially unsupported by iXsystems and could theoretically be wiped by a system update.22

XigmaNAS is effectively locked out of this ecosystem due to the FreeBSD incompatibility mentioned earlier.

Rockstor, being openSUSE (RPM-based), does not have a native .deb package for the client, but the static binary provided by Proxmox can theoretically run on it, though integration is manual and lacks a GUI plugin.25

Chinese:

OpenMediaVault 在这方面表现出色。作为标准的 Debian,安装 PBS 客户端非常简单 (apt install proxmox-backup-client)。这允许 OMV 将特定文件夹的备份推送到 PBS 服务器,完全绕过 Hypervisor。

TrueNAS SCALE 通过一种“黑客手段”允许这样做。用户可以启用 apt 包管理器(通常是禁用的)来安装客户端。虽然有效,允许将数据集直接备份到 PBS,但这在官方上不受 iXsystems 支持,理论上可能会被系统更新擦除 22。

XigmaNAS 由于前面提到的 FreeBSD 不兼容性,实际上被排除在该生态系统之外。

Rockstor 作为 openSUSE(基于 RPM),没有用于客户端的原生 .deb 包,但 Proxmox 提供的静态二进制文件理论上可以在其上运行,尽管集成是手动的且缺乏 GUI 插件 25。
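For the OMV case, the client installation and a folder-level backup amount to only a few commands. The repository host pbs.example.lan, the user backupuser@pbs, the datastore tank, and the share path are all illustrative placeholders; a Debian 12 (bookworm) base is assumed:

    # on the OMV guest: add the Proxmox client-only repository and its signing key
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
      -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    echo "deb http://download.proxmox.com/debian/pbs-client bookworm main" \
      > /etc/apt/sources.list.d/pbs-client.list
    apt update && apt install proxmox-backup-client
    # push a deduplicated, file-level backup of a share straight to PBS
    export PBS_REPOSITORY='backupuser@pbs@pbs.example.lan:tank'
    proxmox-backup-client backup share.pxar:/srv/share

The repository string follows the user@realm@host:datastore format; the client prompts for the password or an API token secret on first use.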

8. Comparative Synthesis and Decision Matrix

8. 综合比较与决策矩阵

The key architectural findings of the preceding sections are synthesized below.

TrueNAS SCALE: OpenZFS on a Debian/Linux base. Requires HBA PCI passthrough for data disks; QEMU Guest Agent included by default; PBS client possible via an unsupported apt workaround; heaviest RAM demand due to ARC.

OpenMediaVault: Ext4/XFS on Debian (ZFS via plugin). Runs well on VirtIO-SCSI virtual disks with Discard and IOThread; Guest Agent and PBS client install trivially via apt; lowest overhead and greatest migration flexibility.

XigmaNAS: ZFS on FreeBSD. Needs HBA passthrough for safe ZFS; Guest Agent fs-freeze is unreliable, forcing crash-consistent backups; no practical PBS client; lightest resource footprint.

Rockstor: Btrfs on openSUSE. Virtual disks are risky because of serial-number/UUID tracking; Btrfs RAID5/6 remains unstable and scrubs are slow under virtualization; PBS client only via a manually deployed static binary.

9. Conclusion

9. 结论

The choice of a virtualized NAS operating system on Proxmox VE is not merely a preference of interface, but a fundamental architectural decision regarding data safety and resource allocation.

English:

TrueNAS SCALE represents the Performance and Integrity Apex. It is the only choice for users who prioritize data safety above all else and possess the hardware (IOMMU-capable motherboard and HBA) to support it. The requirement for physical passthrough makes it less flexible—migration of the VM to another node requires moving the physical card or identical hardware—but it guarantees that ZFS operates as designed.

OpenMediaVault represents the Virtualization Native. It is the pragmatic choice for 90% of homelab users. By leveraging the efficiency of Linux filesystems on VirtIO-SCSI, it delivers faster performance with significantly lower resource overhead. It decouples the storage from the physical hardware, allowing the NAS VM to be migrated, snapshotted, and backed up using standard Proxmox tools without fear of corrupting a ZFS pool.

XigmaNAS and Rockstor have become specialized tools. XigmaNAS is best reserved for severely resource-constrained hardware where TrueNAS is too heavy, provided the user accepts the backup limitations. Rockstor remains a niche for those specifically requiring Btrfs features, but its fragility in virtualized environments makes it difficult to recommend for general production use.

Chinese:

在 Proxmox VE 上选择虚拟化 NAS 操作系统不仅仅是界面的偏好,而是关于数据安全和资源分配的根本架构决策。

TrueNAS SCALE 代表了性能和完整性的顶点。对于那些将数据安全置于首位并拥有支持它的硬件(支持 IOMMU 的主板和 HBA)的用户来说,它是唯一的选择。物理直通的要求使其灵活性降低——将虚拟机迁移到另一个节点需要移动物理卡或拥有相同的硬件——但它保证了 ZFS 按设计运行。

OpenMediaVault 代表了虚拟化原生方案。它是 90% 家庭实验室用户的务实选择。通过利用 Linux 文件系统在 VirtIO-SCSI 上的效率,它以显著降低的资源开销提供了更快的性能。它将存储与物理硬件解耦,允许使用标准 Proxmox 工具对 NAS 虚拟机进行迁移、快照和备份,而无需担心损坏 ZFS 池。

XigmaNAS 和 Rockstor 已成为专用工具。XigmaNAS 最好保留用于 TrueNAS 过于沉重的严重资源受限的硬件,前提是用户接受备份限制。Rockstor 仍然是那些专门需要 Btrfs 功能的用户的利基市场,但其在虚拟化环境中的脆弱性使其难以推荐用于一般生产用途。

9.1 Final Recommendations

9.1 最终建议

For Critical Data (The "Vault"): Deploy TrueNAS SCALE. Pass through an LSI HBA. Assign static RAM (disable ballooning). Use the "unsupported" script to install Proxmox Backup Client for dataset backups.

针对关键数据(“金库”):部署 TrueNAS SCALE。直通 LSI HBA。分配静态 RAM(禁用 ballooning)。使用“不受支持”的脚本安装 Proxmox Backup Client 以进行数据集备份。

For General Storage (The "Utility"): Deploy OpenMediaVault. Use Proxmox to manage the physical storage (ZFS/LVM). Create VirtIO-SCSI disks with Discard enabled. Enjoy the flexibility of VM migration and fast backups.

针对通用存储(“工具”):部署 OpenMediaVault。使用 Proxmox 管理物理存储 (ZFS/LVM)。创建启用了 Discard 的 VirtIO-SCSI 磁盘。享受虚拟机迁移和快速备份的灵活性。

Works cited

1. Perfomance Benchmarking IDE vs SATA vs VirtIO vs VirtIO SCSI (Local-LVM, NFS, CIFS/SMB) with Windows 10 VM : r/Proxmox - Reddit, accessed November 21, 2025.

2. Proxmox VE Administration Guide, accessed November 21, 2025.

3. VirtIO vs SCSI - Proxmox Support Forum, accessed November 21, 2025.

4. Best practices new virtualized installation in Proxmox - TrueNAS, accessed November 21, 2025.

5. Resource - "Absolutely must virtualize TrueNAS!" ... a guide to not completely losing your data., accessed November 21, 2025.

6. Disk Passthrough vs HBA Passthrough for virtualized TrueNAS : r/Proxmox - Reddit, accessed November 21, 2025.

7. TrueNAS SCALE VM + ZFS cache memory usage, accessed November 21, 2025.

8. Scale 50% use of ARC | TrueNAS Community, accessed November 21, 2025.

9. TrueNAS Scale ARC size is limited to 50% of RAM by default. But why? - Reddit, accessed November 21, 2025.

10. Qemu Guest Agent in TrueNAS SCALE?, accessed November 21, 2025.

11. Qemu Guest Agent installation for newbie - TrueNAS Community Forums, accessed November 21, 2025.

12. Data Management Software - TrueNAS vs. OpenMediaVault NAS Performance Review, accessed November 21, 2025.

13. Home NAS OS and configuration advices : r/HomeNAS - Reddit, accessed November 21, 2025.

14. Freeze on FreeBSD VM running in PVE 9 - Proxmox Support Forum, accessed November 21, 2025.

15. Proxmox VE Administration Guide, accessed November 21, 2025.

16. Disable fs-freeze on snapshot backups - Proxmox Support Forum, accessed November 21, 2025.

17. Volume backup file level - Proxmox Support Forum, accessed November 21, 2025.

18. Cannot backup VM's or Containers : r/Proxmox - Reddit, accessed November 21, 2025.

19. Failed to mount Pool(tank) due to an unknown reason. Command used ['/usr/bin/mount', '/dev/disk/by-label/tank', '/mnt2/tank'] - Rockstor Community Forum, accessed November 21, 2025.

20. Cannot remove "detached" disks - Support - Rockstor Community Forum, accessed November 21, 2025.

21. Are scrubs supposed to take this long? - Rockstor Community Forum, accessed November 21, 2025.

22. Backup Proxmox to TrueNAS Scale NAS? - Reddit, accessed November 21, 2025.

23. Backing up TrueNAS SCALE datasets to Proxmox Backup Server - YouTube, accessed November 21, 2025.

24. Backup Client Usage — Proxmox Backup 4.0.11-2 documentation, accessed November 21, 2025.

25. Installation — Proxmox Backup 4.0.11-2 documentation, accessed November 21, 2025.